Deeply-Supervised CNN for Prostate Segmentation
Prostate segmentation from Magnetic Resonance (MR) images plays an important
role in image-guided intervention. However, the lack of a clear boundary,
particularly at the apex and base, and the large variation in shape and texture
across images from different patients make the task very challenging. To
overcome these problems, in this paper we propose a deeply supervised
convolutional neural network (CNN) that exploits the convolutional features to
accurately segment the prostate from MR images. With its additional deeply
supervised layers, the proposed model detects the prostate region more
effectively than other approaches. Since some information is discarded after
each convolution, it is necessary to pass the features extracted at early
stages on to later stages. The experimental results show that our proposed
method achieves a significant improvement in segmentation accuracy compared to
other reported approaches.
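The two mechanisms the abstract names, passing early-stage features forward to later stages and attaching deeply supervised auxiliary losses at intermediate depths, can be sketched as follows. This is a minimal toy illustration, not the paper's architecture: the linear "stages", layer sizes, loss weights, and random data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stage(x, w):
    """Toy 'stage': a linear map + ReLU standing in for a conv block."""
    return np.maximum(x @ w, 0.0)

def seg_head(feat, w):
    """Segmentation head: maps features to per-pixel foreground scores."""
    return 1.0 / (1.0 + np.exp(-(feat @ w)))  # sigmoid

x = rng.normal(size=(16, 8))            # 16 "pixels", 8 input channels
w1 = rng.normal(size=(8, 8))
w2 = rng.normal(size=(16, 8))

f1 = conv_stage(x, w1)                  # early-stage features
# Skip connection: the later stage sees the input AND the early features,
# so information discarded by later convolutions is not lost.
f2 = conv_stage(np.concatenate([x, f1], axis=1), w2)

wa, wb = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
p_aux = seg_head(f1, wa)                # deeply supervised auxiliary output
p_main = seg_head(f2, wb)               # main output

target = (rng.random((16, 1)) > 0.5).astype(float)
bce = lambda p, t: -np.mean(t * np.log(p + 1e-8) + (1 - t) * np.log(1 - p + 1e-8))
# Deep supervision: the total loss combines the main loss with a weighted
# auxiliary loss on the intermediate prediction.
total_loss = bce(p_main, target) + 0.5 * bce(p_aux, target)
```

The auxiliary loss forces intermediate layers to produce segmentation-relevant features directly, rather than relying only on gradients propagated from the final layer.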
Risk stratification of prostate cancer utilizing apparent diffusion coefficient value and lesion volume on multiparametric MRI
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/136035/1/jmri25363_am.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/136035/2/jmri25363.pd
GazeGNN: A Gaze-Guided Graph Neural Network for Disease Classification
The application of eye-tracking techniques in medical image analysis has
become increasingly popular in recent years. Eye tracking captures the visual
search patterns of domain experts, which carry much important information about
health and disease. How to efficiently integrate radiologists' gaze patterns
into diagnostic analysis has therefore become a critical question. Existing
works usually transform gaze information into visual attention maps (VAMs) to
supervise the learning process. However, this time-consuming procedure makes it
difficult to develop end-to-end algorithms. In this work, we propose a novel
gaze-guided graph neural network (GNN), GazeGNN, to perform disease
classification from medical scans. In GazeGNN, we create a unified
representation graph that models both the image and the gaze-pattern
information. Hence, the eye-gaze information is utilized directly, without
being converted into VAMs. With this benefit, we develop a real-time,
real-world, end-to-end disease classification algorithm for the first time,
avoiding the noise and time consumption introduced during VAM preparation. To
the best of our knowledge, GazeGNN is the first work to adopt a GNN to
integrate image and eye-gaze data. Our experiments on a public chest X-ray
dataset show that our proposed method achieves the best classification
performance among existing methods.
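The unified-graph idea described above, image content and gaze fixations living as nodes in one graph instead of gaze being rendered into a VAM, can be sketched as follows. Everything here is an illustrative assumption (node counts, edge choices, GCN-style mean aggregation), not GazeGNN's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 image-patch nodes and 2 gaze-fixation nodes
# share a single graph, so no visual attention map is ever built.
patch_feats = rng.normal(size=(4, 3))
gaze_feats = rng.normal(size=(2, 3))
nodes = np.vstack([patch_feats, gaze_feats])   # unified node set

# Toy adjacency: patches connected in a chain; each gaze node attached
# to the patch it fixated on (the index pairs are illustrative).
n = nodes.shape[0]
adj = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (4, 0), (5, 2)]:
    adj[i, j] = adj[j, i] = 1.0
adj += np.eye(n)                               # self-loops

# One round of mean-aggregation message passing (untrained weights):
# each node averages its neighbors' features, then applies a linear map + ReLU.
deg = adj.sum(axis=1, keepdims=True)
w = rng.normal(size=(3, 3))
h = np.maximum((adj / deg) @ nodes @ w, 0.0)

# Graph-level readout for disease classification: mean-pool all nodes.
graph_embedding = h.mean(axis=0)
```

Because gaze nodes exchange messages with the patches they fixated on, the gaze signal influences the pooled embedding end to end, with no separate VAM-generation step.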
Assessing and testing anomaly detection for finding prostate cancer in spatially registered multi-parametric MRI
Background: Evaluating and displaying prostate cancer through non-invasive imaging such as multi-parametric MRI (MP-MRI) supports the management of patients. Recent research quantitatively applied supervised target-detection algorithms using vectoral tumor signatures to spatially registered T1, T2, diffusion, and dynamic contrast-enhanced images. This is the first study to apply the Reed-Xiaoli (RX) multi-spectral anomaly detector (an unsupervised target detector) to prostate cancer; RX searches for voxels that depart from the background normal tissue and flags aberrant voxels, presumably tumors.

Methods: MP-MRI (T1, T2, diffusion, and dynamic contrast-enhanced images, or seven components) was prospectively collected from 26 patients, then resized, translated, and stitched to form spatially registered multi-parametric cubes. The covariance matrix (CM) and mean μ were computed from background normal tissue. For RX, noise in the CM was reduced by filtering out principal components (PC), regularization, and elliptical envelope minimization. The RX images were compared to images derived from the threshold Adaptive Cosine Estimator (ACE) and quantitative color analysis. Receiver Operating Characteristic (ROC) curves were generated for RX against the reference images. To quantitatively assess algorithm performance, the Area Under the Curve (AUC) and the Youden Index (YI) points of the ROC curves were computed.

Results: Relative to the ACE images, the patient-averaged AUC[YI] for RX after filtering 3 and 4 PC was 0.734[0.706] and 0.727[0.703], respectively. With the ACE images as the reference, the AUC[YI] for RX was 0.638[0.639] for modified regularization, 0.716[0.690] for regularization, 0.544[0.597] for elliptical envelope minimization, and 0.581[0.608] for the unprocessed CM. Relative to the quantitative color images, the AUC[YI] for RX after filtering 3 and 4 PC was 0.742[0.711] and 0.740[0.708], respectively. With the color images as the reference, the AUC[YI] for RX was 0.643[0.648] for modified regularization, 0.722[0.695] for regularization, 0.508[0.605] for elliptical envelope minimization, and 0.569[0.615] for the unprocessed CM. All standard errors were less than 0.020.

Conclusions: This first study of spatially registered MP-MRI applied anomaly detection to prostate cancer using RX, an unsupervised target-detection algorithm. For RX, filtering out PC and applying regularization achieved higher AUC and YI, using the ACE and color images as references, than the unprocessed CM, modified regularization, and elliptical envelope minimization.
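The RX detector described in the Methods scores each voxel by its Mahalanobis distance from the background mean under the background covariance; a Youden Index can then be read off the resulting ROC behavior. The sketch below uses synthetic data with illustrative dimensions (500 background voxels, 7 components) and a simple ridge term standing in for the paper's CM noise-reduction steps; none of these choices reproduce the study's actual processing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "background normal tissue": 500 voxels, 7 MP-MRI components.
background = rng.normal(size=(500, 7))
mu = background.mean(axis=0)
cm = np.cov(background, rowvar=False)
cm_inv = np.linalg.inv(cm + 1e-6 * np.eye(7))  # mild regularization of the CM

def rx_score(voxels):
    """RX anomaly score: squared Mahalanobis distance from the background."""
    d = voxels - mu
    return np.einsum('ij,jk,ik->i', d, cm_inv, d)

normal_voxels = rng.normal(size=(50, 7))
anomalous_voxels = rng.normal(size=(50, 7)) + 5.0  # shifted off the background

scores = np.concatenate([rx_score(normal_voxels), rx_score(anomalous_voxels)])
labels = np.concatenate([np.zeros(50), np.ones(50)])

# Youden Index: max over thresholds of (sensitivity + specificity - 1),
# i.e. TPR - FPR at the best operating point on the ROC curve.
yi = max((scores[labels == 1] >= t).mean() - (scores[labels == 0] >= t).mean()
         for t in np.sort(scores))
```

Anomalous voxels depart from the background statistics, so their RX scores are far larger than those of background-like voxels, and the Youden Index approaches 1 when the two score distributions separate cleanly.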